
9 November 2016

Russ Allbery: Some thoughts on the US elections

I apparently am not going to get anything done today until I write this. Some thoughts, in no particular order. I don't have any profound conclusions. I'm honestly pretty upset. And pretty scared. But we have to talk to each other. And we have to listen to each other. And we have to persuade each other. And we have to be willing to be persuaded. And please go tell someone this morning that you have their back.

1 November 2016

Jaldhar Vyas: Sal Mubarak 2073!

Wishing everyone a happy Gujarati New Year, Vikram Samvat 2073, named Kilaka, and hoping the upcoming year will be yuuge for you. These next couple of paragraphs are totally not an excuse for why it will take a few more days for me to reach seven blog posts.

Reading reports about Diwali in the American press, I see a bit of confusion whether Diwali is one day or five. Well, technically it is just one (Sunday the 30th this year) but there are a number of other observances around it which could be classed as subsidiaries if you want to look at it that way. The season commenced last Wednesday with Rama Ekadashi. (Where the Gujarati name is different, I'll use that and put the Sanskrit name in parentheses.) That's a fast day and therefore not much fun. Thursday was Vagh Barash (vyaghra dvadashi) which, as the name suggests, is something to do with tigers, but we don't in my experience particularly do anything special that day. Friday, things began in earnest with Dhan Terash (dhana trayodashi) when Lakshmi, the Goddess of prosperity, is worshipped. It is also a good day to buy gold. Saturday was Kali Chaudash (Kali Chaturdashi or Naraka Chaturdashi). On this day many Gujarati families, including mine, worship their Kuladevi (patron Goddess of the family) even if She is not an aspect of Kali. (Others observe this on the Ashtami of Navaratri.) The day is also associated with the God Hanuman. Some people say it is His Jayanti (birthday), though we observe it in Chaitra (March-April). It is also the best day for learning mantras, and I initiated a couple of people, including my son, into a mantra I know.

Sunday was Diwali (Deepavali) proper. As a Brahmana I spent much of the day signing blessings in the account books of shopkeepers. Well, nowadays only a few old people have actual account books, so usually they print out a spreadsheet and I sign that. But home is where the main action is. Lights are lit, fireworks are set off, and prayers are offered to Lakshmi. But most important of all, this is the day good boys and girls get presents. Unfortunately I have nothing interesting to report; just the usual utilitarian items of clothing. Fireworks, by the way, are technically illegal in New Jersey, not that that has ever stopped anyone from getting them. The past few years, Jersey City has attempted to compromise by allowing a big public fireworks display. Although it was nice and sunny all day, by nighttime we had torrential rain and the fireworks display got washed out. So I'm glad I rebelled against the system with my small cache of sparklers.

Today (or yesterday, by the time this gets posted) was the Gujarati New Year's Day. There is also the commemoration of the time the God Krishna lifted up Mt Govardhan with one finger, which every mandir emulates by making an annakuta or mountain of food. Tuesday is Bhai Beeja (Yama Dvitiya in Sanskrit or Bhai Duj in Hindi) when sisters cook a meal for their brothers. My son is also going to make something (read: microwave something) for his sister. So those are the five days of Diwali. Though many will not consider it to be truly over until this Saturday, the lucky day of Labh Pancham (Labha panchami). And if I still haven't managed to write seven blog posts by then, there is always Deva Diwali...

31 August 2016

Enrico Zini: Links for September 2016

A Few Useful Mental Tools from Richard Feynman [archive]
These tricks show Feynman taking the method of thought he learned in pure science and applying it to the more mundane topics most of us have to deal with every day.
Pasta [archive]
A comprehensive introduction to pasta, to keep at hand in case I meet someone who has little familiarity with it.
MPTP: One Designer Drug and Serendipity [archive]
Abstract: Through an unlikely series of coincidences and fortunate accidents, the development of Parkinson's disease in several illicit drug users was traced to their use of a meperidine analog contaminated with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). The discovery of a chemical capable of producing animal models of the disease has revitalized research efforts and resulted in important new information. The serendipitous finding also prompted consideration of what changes seem advisable if designer drugs are to be dealt with more efficaciously.
The Debunking Handbook: now freely available for download
The Debunking Handbook, a guide to debunking misinformation, is now freely available to download. Although there is a great deal of psychological research on misinformation, there's no summary of the literature that offers practical guidelines on the most effective ways of reducing the influence of myths.
Faulty neon light jams radio appliances [archive]
An apparent interference source began plaguing wireless vehicle key fobs, cell phones, and other wireless electronics. Key fob owners found they could not open or start their vehicles remotely until their vehicles were towed at least a block away, nor were they able to call for help on their cell phones when problems occurred
Calvin & Muad'Dib
Calvin & Hobbes with text taken from Frank Herbert's Dune. It's been around since 2013 and I consistently found it moving and deep.
When Birds Attack - Bike Helmet Hacks [archive]
Attacks on cyclists by Australian magpies have prompted several creative adaptations, including attaching an afro wig to the bike helmet.

24 July 2016

Russ Allbery: Review: The Run of His Life

Review: The Run of His Life, by Jeffrey Toobin
Publisher: Random House
Copyright: 1996, 1997
Printing: 2015
ISBN: 0-307-82916-2
Format: Kindle
Pages: 498
The O.J. Simpson trial needs little introduction to anyone who lived through it in the United States, but here is a brief summary for those who didn't. O.J. Simpson is a Hall of Fame football player and one of the best running backs to ever play the game. He's also black, which is very relevant to much of what later happened. After he retired from professional play, he became a television football commentator and a spokesperson for various companies (particularly Hertz, a car rental business). In 1994, he was arrested for the murder of two people: his ex-wife, Nicole Brown Simpson, and Ron Goldman (a friend of Nicole's). The arrest happened after a bizarre low-speed police chase across Los Angeles in a white Bronco that was broadcast live on network television. The media turned the resulting criminal trial into a reality TV show, with live cable television broadcasts of all of the court proceedings. After nearly a full year of trial (with the jury sequestered for nine months; more on that later), a mostly black jury returned a verdict of not guilty after a mere four hours of deliberation.

Following the criminal trial, in an extremely unusual legal proceeding, Simpson was found civilly liable for Ron Goldman's death in a lawsuit brought by his family. Bizarre events surrounding the case continued long afterwards. A book titled If I Did It (with "if" in very tiny letters on the cover) was published, ghost-written but allegedly with Simpson's input and cooperation, and was widely considered a confession. Another legal judgment let the Goldman family get all the profits from that book's publication. In an unrelated (but also bizarre) incident in Las Vegas, Simpson was later arrested for kidnapping and armed robbery and is currently in prison until at least 2017.

It is almost impossible to have lived through the O.J. Simpson trial in the United States and not have formed some opinion on it. I was in college and without a TV set at the time, and even I watched some of the live trial coverage. Reactions to the trial were extremely racially polarized, as you might have guessed. A lot of black people believed at the time that Simpson was innocent (probably fewer now, given subsequent events). A lot of white people thought he was obviously guilty and was let off by a black jury for racial reasons. My personal opinion, prior to reading this book, was a common "third way" among white liberals: Simpson almost certainly committed the murders, but the racist Los Angeles police department decided to frame him for a crime that he did commit by trying to make the evidence stronger. That's a legitimate reason in the US justice system for finding someone innocent: the state has an obligation to follow correct procedure and treat the defendant fairly in order to get a conviction. I have a strong bias towards trusting juries; frequently, it seems that the media second-guesses the outcome of a case that makes perfect sense as soon as you see all the information the jury had (or didn't have).

Toobin's book changed my mind. Perhaps because I wasn't watching all of the coverage, I was greatly underestimating the level of incompetence and bad decision-making by everyone involved: the prosecution, the defense, the police, the jury, and the judge. This court case was a disaster from start to finish; no one involved comes away looking good. Simpson was clearly guilty given the evidence presented, but the case was so badly mishandled that it gave the jury room to reach the wrong verdict.
(It's telling that, in the far better managed subsequent civil case, the jury had no trouble reaching a guilty verdict.) The Run of His Life is a very detailed examination of the entire Simpson case, from the night of the murder through the end of the trial and (in an epilogue) the civil case. Toobin was himself involved in the media firestorm, breaking some early news of the defense's decision to focus on race in The New Yorker and then serving throughout the trial as a legal analyst, and he makes it clear when he becomes part of the story. But despite that, this book felt objective to me. There are tons of direct quotes, lots of clear description of the evidence, underlying interviews with many of the people involved to source statements in the book, and a comprehensive approach to the facts. I think Toobin is a bit baffled by the black reaction to the case, and that felt like a gap in the comprehensiveness and the one place where he might be accused of falling back on stereotypes and easy judgments. But other than that hole, Toobin applies his criticism even-handedly and devastatingly to all parties.

I won't go into all the details of how Toobin changed my mind. It was a cumulative effect across the whole book, and if you're curious, I do recommend reading it. A lot of it was the detailed discussion of the forensic evidence, which was undermined for the jury at trial but looks very solid outside the hothouse of the case. But there is one critical piece that I would hope would be handled differently today, twenty years later, than it was by the prosecutors in that case: Simpson's history of domestic violence against Nicole. With what we now know about patterns of domestic abuse, the escalation to murder looks entirely unsurprising. And that history of domestic abuse was exceedingly well-documented: multiple external witnesses, police reports, and one actual prior conviction for spousal abuse (for which Simpson did "community service" that was basically a joke). The prosecution did a very poor job of establishing this history and the jury discounted it. That was a huge mistake by both parties.

I'll mention one other critical collection of facts that Toobin explains well and that contradicted my previous impression of the case: the relationship between Simpson and the police. Today, in the era of Black Lives Matter, the routine abuse of black Americans by the police is more widely known. At the time of the murders, it was less recognized among white Americans, although black Americans certainly knew about it. But even in 1994, the Los Angeles police department was notorious as one of the most corrupt and racist big-city police departments in the United States. This is the police department that beat Rodney King. Mark Fuhrman, one of the police officers involved in the case (although not that significantly, despite his role at the trial), was clearly racist and had no business being a police officer. It was therefore entirely believable that these people would have decided to frame a black man for a murder he actually committed. What Toobin argues, quite persuasively and with quite a lot of evidence, is that this analysis may make sense given the racial tensions in Los Angeles but ignores another critical characteristic of Los Angeles politics, namely a deference to celebrity. Prior to this trial, O.J. Simpson largely followed the path of many black athletes who become broadly popular in white America: underplaying race and focusing on his personal celebrity and connections.
(Toobin records a quote from Simpson earlier in his life that perfectly captures this approach: "I'm not black, I'm O.J.") Simpson spent more time with white businessmen than with the black inhabitants of central Los Angeles. And, more to the point, the police treated him as a celebrity, not as a black man. Toobin takes some time to chronicle the remarkable history of deference and familiarity that the police showed Simpson. He regularly invited police officers to his house for parties. The police had a long history of largely ignoring or downplaying his abuse of his wife, including not arresting him in situations that clearly seemed to call for that, showing a remarkable amount of deference to his side of the story, not pursuing clear violations of the court judgment after his one conviction for spousal abuse, and never showing much inclination to believe or protect Nicole. Even on the night of the murder, they started following a standard playbook for giving a celebrity advance warning of investigations that might involve them before the news media found out about them. It seems clear, given the evidence that Toobin collected, that the racist Los Angeles police didn't focus that animus at Simpson, a wealthy celebrity living in Brentwood. He wasn't a black man in their eyes; he was a rich Hall of Fame football player and a friend.

This obviously raises the question of how the jury could return an innocent verdict. Toobin provides plenty of material to analyze that question from multiple angles in his detailed account of the case, but I can tell you my conclusion: Judge Lance Ito did a horrifically incompetent job of managing the case. He let the lawyers wander all over the case, interacted bizarrely with the media coverage (and was part of letting the media turn it into a daytime drama), was not crisp or clear about his standards of evidence and admissibility, and, perhaps worst of all, let the case meander on at incredible length, with a fully sequestered jury allowed only brief conjugal visits and no media contact (not even bookstore shopping!). Quite a lot of anger was focused on the jury after the acquittal, and I do think they reached the wrong conclusion and had all the information they would have needed to reach the correct one. But Toobin touches on something that I think would be very hard to comprehend without having lived through it. The jury and alternate pool essentially lived in prison for nine months, with guards and very strict rules about contact with the outside world, in a country where compensation for jury duty is almost nonexistent. There were a lot of other factors behind their decision, including racial tensions and the sheer pressure from judging a celebrity case about which everyone has an opinion, but I think it's nearly impossible to overstate the psychological tension and stress from being locked up with random other people under armed guard for three quarters of a year. It's hard for jury members to do an exhaustive and careful deliberation in a typical trial that takes a week and doesn't involve sequestration. People want to get back to their lives and families. I can only imagine the state I would be in after nine months of this, or what poor psychological shape I would be in to make a careful and considered decision. Similarly, for those who condemned the jury for profiting via books and media appearances after the trial, the current compensation for jurors is $15 per day (not hour). I believe at the time it was around $5 per day.
There are a few employers who will pay full salary for the entire jury service, but only a few; many cap the length at a few weeks, and some employers treat all jury duty as unpaid leave. The only legal requirement for employers in the United States is that employees who serve on a jury have their jobs held for them to return to, but compensation is pathetic, not tied to minimum wage, and employers do not have to supplement it. I'm much less inclined to blame the jurors than the system that badly mistreated them. As you can probably tell from the length of this review, I found The Run of His Life fascinating. If I had followed the whole saga more closely at the time, this might have been old material, but I think my vague impressions and patchwork of assumptions were more typical than not among people who were around for the trial but didn't invest a lot of effort into following it. If you are like me, and you have any interest in the case or the details of how the US criminal justice system works, this is a fascinating case study, and Toobin does a great job with it. Recommended. Rating: 8 out of 10

7 July 2016

Daniel Pocock: Can you help with monitoring packages in Debian and Ubuntu?

Debian (and consequently Ubuntu) contains a range of extraordinarily useful monitoring packages. I've been maintaining several of them at a basic level, but as more of my time is taken up by free Real-Time Communications software, I haven't been able to follow the latest upstream releases for all of the other packages I maintain. The versions we are distributing now still serve their purpose well, but as some people would like newer versions, I don't want to stand in the way.

Monitoring packages are for everyone. Even if you are a single user or developer with a single desktop or laptop and no servers, you may still find some of these packages useful. For example, after doing an apt-get upgrade or dist-upgrade, it can be extremely beneficial to look at your logs in LogAnalyzer with all the errors and warnings colour-coded, so you can see at a glance whether your upgrade broke anything. If you are testing new software before a release or trying to troubleshoot erratic behavior, this type of colour-coded feedback can also help you focus on possible problems without the eyestrain of tailing a logfile.

[Image: LogAnalyzer]

How to help

A good first step is simply looking over the packages maintained by the pkg-monitoring group and discovering whether any of them are useful for your own needs. You may be familiar with alternatives that exist in Debian; if so, feel free to comment on whether you believe any of these packages should be dropped by creating a wishlist bug against the package concerned. The next step is joining the pkg-monitoring mailing list. If you are a Debian Developer or Debian Maintainer with upload rights already, you can join the group on alioth. If you are not at that level yet, you are still very welcome to test new versions of the packages, upload them on mentors.debian.net, and then join the mentors mailing list to look for a member of the community who can help review your work and sponsor an upload for you. Each of the packages should have a README.source file in the repository explaining more about how the package is maintained. Familiarity with Git is essential. Note that all of the packages keep their debian/* artifacts in a branch called debian/sid while the master branch tracks the upstream repository. You can clone the Debian package repositories for any of these projects from alioth and build them yourself, try building packages of new upstream versions, and try to investigate any of the bug reports submitted to Debian (a small sketch of this workflow appears at the end of this post). Some of the bugs may have already been fixed by upstream changes and can be marked appropriately.

Integrating your monitoring systems

Two particular packages I would like to highlight are ganglia-nagios-bridge and syslog-nagios-bridge. They are not exclusively for Nagios and could also be used with Icinga or other monitoring dashboards. The key benefit of these packages is that all the alerting is consolidated in a single platform, Nagios, which is able to display them in a colour-coded dashboard and intelligently send notifications to people in a manner that is fully configurable. If you haven't integrated your monitoring systems already, these projects provide a simple and lightweight way to start doing so.
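To make the checkout workflow described under "How to help" more concrete, here is a minimal Python sketch of fetching one of the pkg-monitoring packaging repositories and switching to the packaging branch. Treat it as an illustration only: the anonscm.debian.org URL layout and the choice of ganglia-nagios-bridge as the example are my assumptions, so substitute the repository you actually want to work on, and build afterwards with whatever tool you normally use (dpkg-buildpackage, git-buildpackage, and so on).

```python
#!/usr/bin/env python3
"""Sketch: clone a pkg-monitoring repository and check out its packaging branch.

The repository name and URL scheme below are illustrative assumptions,
not taken from the post; adjust them for the package you care about.
"""
import subprocess
from pathlib import Path

REPO = "ganglia-nagios-bridge"                               # hypothetical example package
URL = f"git://anonscm.debian.org/pkg-monitoring/{REPO}.git"  # assumed URL layout


def fetch_packaging(workdir: Path = Path(".")) -> Path:
    """Clone the repository (if needed) and switch to the debian/sid branch."""
    dest = workdir / REPO
    if not dest.exists():
        subprocess.run(["git", "clone", URL, str(dest)], check=True)
    # master tracks upstream; the debian/* artifacts live in debian/sid
    subprocess.run(["git", "checkout", "debian/sid"], cwd=dest, check=True)
    return dest


if __name__ == "__main__":
    path = fetch_packaging()
    print(f"Packaging branch checked out in {path}")
```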

19 May 2016

Matthew Garrett: Your project's RCS history affects ease of contribution (or: don't squash PRs)

Github recently introduced the option to squash commits on merge, and even before then several projects requested that contributors squash their commits after review but before merge. This is a terrible idea that makes it more difficult for people to contribute to projects.

I'm spending today working on reworking some code to integrate with a new feature that was just integrated into Kubernetes. The PR in question was absolutely fine, but just before it was merged the entire commit history was squashed down to a single commit at the request of the reviewer. This single commit contains type declarations, the functionality itself, the integration of that functionality into the scheduler, the client code and a large pile of autogenerated code.

I've got some familiarity with Kubernetes, but even then this commit is difficult for me to read. It doesn't tell a story. I can't see its growth. Looking at a single hunk of this diff doesn't tell me whether it's infrastructural or part of the integration. Given time I can (and did) figure it out, but it's an unnecessary waste of effort that could have gone towards something else. For someone who's less used to working on large projects, it'd be even worse. I'm paid to deal with this. For someone who isn't, the probability that they'll give up and do something else entirely is even greater.

I don't want to pick on Kubernetes here - the fact that this Github feature exists makes it clear that a lot of people feel that this kind of merge is a good idea. And there are certainly cases where squashing commits makes sense. Commits that add broken code and which are immediately followed by a series of "Make this work" commits also impair readability and distract from the narrative that your RCS history should present, and Github presents this feature as a way to get rid of them. But that ends up being a false dichotomy. A history that looks like "Commit", "Revert Commit", "Revert Revert Commit", "Fix broken revert", "Revert fix broken revert" is a bad history, as is a history that looks like "Add 20,000 line feature A", "Add 20,000 line feature B".

When you're crafting commits for merge, think about your commit history as a textbook. Start with the building blocks of your feature and make them one commit. Build your functionality on top of them in another. Tie that functionality into the core project and make another commit. Add client support. Add docs. Include your tests. Allow someone to follow the growth of your feature over time, with each commit being a chapter of that story. And never, ever, put autogenerated code in the same commit as an actual functional change.

People can't contribute to your project unless they can understand your code. Writing clear, well commented code is a big part of that. But so is showing the evolution of your features in an understandable way. Make sure your RCS history shows that, otherwise people will go and find another project that doesn't make them feel frustrated.

(Edit to add: Sarah Sharp wrote on the same topic a couple of years ago)


2 May 2016

Russ Allbery: Review: The Girl with the Dragon Tattoo

Review: The Girl with the Dragon Tattoo, by Stieg Larsson
Translator: Reg Keeland
Series: Millennium #1
Publisher: Vintage Crime
Copyright: 2005, 2008
Printing: June 2009
ISBN: 0-307-47347-3
Format: Mass market
Pages: 644
As The Girl with the Dragon Tattoo opens, Mikael Blomkvist is losing a criminal libel suit in Swedish court. His magazine, Millennium, published his hard-hitting piece of investigative journalism that purported to reveal sketchy arms deals and financial crimes by Hans-Erik Wennerström, a major Swedish businessman. But the underlying evidence didn't hold up, and Blomkvist could offer no real defense at trial. The result is a short prison stint for him (postponed several months into this book) and serious economic danger for Millennium.

Lisbeth Salander is a (very) freelance investigator for Milton Security. Her specialty is research and background checks: remarkably thorough, dispassionate, and comprehensive. She's almost impossible to talk to, tending to meet nearly all questions with stony silence, but Dragan Armansky, the CEO of Milton Security, has taken her partly under his wing. She, and Milton Security, were hired by a lawyer named Dirch Frode to do a comprehensive background check on Mikael Blomkvist, which she and Dragan present near the start of the book. The reason, as the reader discovers in a few more chapters, is that Frode's employer wants to offer Blomkvist a very strange job.

Over forty years ago, Harriet Vanger, scion of one of Sweden's richest industrial families, disappeared. Her uncle, Henrik Vanger, has been obsessed with her disappearance ever since, but in forty years of investigation has never been able to discover what happened to her. There are some possibilities for how her body could have been transported off the island the Vangers (mostly) lived, and live, on, but motive and suspects are still complete unknowns. Vanger wants Blomkvist to try his hand under the cover of writing a book about the Vanger family. Payment is generous, but even more compelling is Henrik Vanger's offer to give Blomkvist documented, defensible evidence against Wennerström at the end of the year.

The Girl with the Dragon Tattoo (the original Swedish title is Män som hatar kvinnor, "Men who hate women") is the first of three mystery novels written at the very end of Stieg Larsson's life, all published posthumously. They made quite a splash when they were published: they won multiple awards, sold millions of copies, and have resulted in four movies to date. I've had a copy of the book sitting around for a while and finally picked it up when in the mood for something a bit different.

A major disclaimer up front: I read very little crime and mystery fiction. Every genre has its own conventions and patterns, and regular genre readers often look for different things than people new to that genre. My review is from a somewhat outside and inexperienced perspective, which may not be useful for regular genre readers. I'm also a US reader, reading the book in translation. It appears to be a very good translation, but it was also quite obvious to me that The Girl with the Dragon Tattoo was written from a slightly different set of cultural assumptions than I brought to the book. This is one of the merits of reading books from other cultures in translation. It can be eye-opening, and can carry some of the same thrill as science fiction or fantasy, to hit the parts of the book that question your assumptions. But it can also be hard to tell whether some striking aspect of a book is due to a genre convention I wasn't familiar with, a Swedish cultural assumption that I don't share, or just the personal style of the author. A few things do leap out as cultural differences.
Blomkvist has to spend a few months in prison in the middle of this book, and that entire experience is completely foreign to an American understanding of what prison is like. The degradation, violence, and awfulness that are synonymous with prison for an American are almost entirely absent. He even enjoys the experience as quiet time to focus on writing a history of the Vangers (Blomkvist early on decides to take his cover story seriously, since he doubts he'll make any inroads into the mystery of Harriet's disappearance but can at least get a book out of it). It's a minor element in the book, glossed over in a few pages, but it's certainly eye-opening to see how minimum security prison could be structured in a civilized country.

Similarly, as an American reader, I was struck by how hard Larsson has to work to ruin Salander's life. Although much of the book is written from Blomkvist's perspective (in tight third person), Lisbeth Salander is the titular girl with the dragon tattoo and becomes more and more involved in the story as it develops. The story Larsson wanted to tell requires that she be in a very precarious position legally and socially. In the US, this would be relatively easy, particularly for someone who acts like Salander does. In Sweden, Larsson has to go to monumental efforts to find ways for Salander to credibly fall through Sweden's comprehensive social safety net, and still mostly relies on Salander's complete refusal to assist or comply with any form of authority or support. I've read a lot about differences in policies around social support between the US and Scandinavian countries, but I've rarely read something that drove the point home more clearly than the amount of work a novelist has to go to in order to mess up their protagonist's life in Sweden.

The actual plot is slow-moving and as much about the psychology of the characters as it is about the mystery. The reader gets inside the thoughts of the characters occasionally, but Larsson shows far more than tells and leaves it to the reader to draw more general conclusions. Blomkvist's relationship with his long-time partner and Millennium co-founder is an excellent example: so much is left unstated that I would have expected other books to lay down in black and white, and the characters seem surprisingly comfortable with ambiguity. (Some of this may be my genre unfamiliarity; SFF tends to be straightforward to a fault, and more literary fiction is more willing to embrace ambiguous relationships.) While the mystery of Harriet's disappearance forms the backbone of the story, rather more pages are spent on Blomkvist navigating the emotional waters of the near-collapse of his career and business, his principles around investigation and journalism, and the murky waters of the Vangers' deeply dysfunctional family.

Harriet's disappearance is something of a locked room mystery. The day she disappeared, a huge crash closed the only bridge from the island to the mainland, both limiting suspects and raising significant questions about why her body was never found on the island. It's also forty years into the past, so Blomkvist has to rely on Henrik Vanger's obsessive archives, old photographs, and old police reports. I found the way it unfolded to be quite satisfying: there are just enough clues to let Blomkvist credibly untangle things with some hard work and research, but they're obscure enough to make it plausible that previous investigators missed them. Through most of this novel, I wasn't sure what I thought of it.
I have a personal interest in Blomkvist's journalistic focus, wrongdoing by rich financiers, but I had trouble warming to Blomkvist himself. He's a very passive, inward character, who spends a lot of the early book reacting to things that are happening to him. Salander is more dynamic and honestly more likable, but she's also deeply messed up, self-destructive, and does some viciously awful things in this book. And the first half of the book is very slow: lots of long conversations, lots of character introduction, and lots of Blomkvist wandering somewhat aimlessly. It's only when Larsson gets the two protagonists together that I thought the book started to click. Salander sees Blomkvist's merits more clearly than the reader can, I think.

I also need to give a substantial warning: The Girl with the Dragon Tattoo is a very violent novel, and a lot of that violence is sexual. By mid-book, Blomkvist realizes that Harriet's disappearance is somehow linked with a serial killer whose trademark is horrific, sexualized symbolism drawn from Leviticus. There is a lot of rape here, including revenge rape by a protagonist. If that sort of thing is likely to bother you, you may want to steer way clear.

That said, despite the slow pace, the nauseating subject matter, the occasionally very questionable ethics of protagonists, and a twist of the knife at the very end of the novel that I thought was gratuitously nasty on Larsson's part and wasn't the conclusion I wanted, I found myself enjoying this. It has a different pace and a different flavor than what I normally read, the characters are deep and complex enough to play off each other in satisfying ways, and Salander is oddly compelling to read about. Given the length, it's a substantial investment of time, but I don't regret reading it, and I'm quite tempted to read the sequel. I'm not sure this is the sort of book I can recommend (or not recommend) given my lack of familiarity with the genre, but I think US readers might get an additional layer of enjoyment out of seeing how different a slant the Swedish setting puts on some of the stock elements of a crime novel. Followed by The Girl Who Played with Fire. Rating: 7 out of 10

25 April 2016

Gunnar Wolf: Passover / Pesaj, a secular viewpoint, a different viewpoint... And slowly becoming history!

As many of you know (where "you" is "people reading this who actually know who I am"), I come from a secular Jewish family. Although we have some religious (even very religious) relatives, neither my parents nor my grandparents were ever religious. Not that spirituality wasn't important to them: my grandparents both went deep into understanding, by and for themselves, the different spiritual issues that came to their mind, and that's one of the traits I most remember about them while I was growing up. But formal, organized religion was never much welcome in the family; again, each of us had their own ways to reconcile our needs and fears with what we thought, read and understood.

This week is the Jewish celebration of Passover, or Pesaj as we call it (for which Passover is a direct translation, as Pesaj refers to the act of the angel of death passing over the houses of the sons of Israel during the tenth plague in Egypt; in Spanish, the name would be Pascua, which rather refers to the ritual sacrifice of a lamb that was done in the days of the great temple)... Anyway, I like giving context to what I write, but it always takes me off the main topic I want to share. Back to my family.

I am a third-generation member of the Hashomer Hatzair Zionist socialist youth movement; my grandmother was among the early Hashomer Hatzair members in Poland in the 1920s, both my parents were active in the Mexico ken in the 1950s-1960s (in fact, they met and first interacted there), and I was a member from 1984 until 1996. It was also thanks to Hashomer that my wife and I met, and if my children get to have any kind of Jewish contact in their lives, I hope it will be through Hashomer as well. Hashomer is a secular, nationalist movement. A youth movement with over a century of history might seem like a contradiction. Over the years, of course, it has changed many details, but as far as I know, the essence is still there, and I hope it will continue to be so for good: helping shape integral people, with identification with Judaism as a nation and not as a religion; keeping our cultural traits, but interpreting them liberally and aligned with a view towards the common good. Socialism, no matter how passé the concept seems nowadays. Collectivism. Inclusion. Peaceful coexistence with our neighbours. Acceptance of the different. I could write pages on how I learnt about each of them during my years in Hashomer, and how such concepts struck me as completely different from what the broader Jewish community I grew up in understood and related to them... But again, I am steering off the topic I want to pursue.

Every year, we used to have a third Seder (that is, a third Passover ceremony) at Hashomer. A third one because tradition mandates two ceremonies to be held outside Israel, and in a movement comprised of people aged between 7 and 21, a Seder competing with the family one would not have been too successful, so we held a celebration on a following day.
But it would never be the same as the "formal" Pesaj: for the Seder, the Jewish tradition mandates following the Hagadá. The Seder always follows a predetermined order (literally, Seder means order), and the Hagadá (which means both legend and a story that is spoken; you can find full Hagadot online if you want to see what rites are followed; I found a seemingly well done, modern, Hebrew and English version, a more traditional one in Hebrew and Spanish, and Wikipedia has a description including its parts and rites) is, quite understandably, full of religious words, praises for God, and... well, many things that are not in line with Hashomer's values. How could we be a secular movement and have a big celebration full of praises for God? How could we yearn for life in the kibbutz while distanced from the true agricultural meaning of the celebration? The members of Hashomer Hatzair repeatedly took on the task (or, as many would see it, the heresy) of adapting the Hagadá to follow their worldview, updating it for the twentieth century, making it more palatable for our peculiarities.

Yesterday, when we had our Seder, I saw that my father still has, together with the other, more traditional Hagadot we use, two copies of the Hagadá he used at Hashomer Hatzair's third Seder. And they are not only beautiful works showing what they, as very young activists, thought and made solemn, but over time, they are becoming historic items by themselves (one from when my parents were still young janijim, in 1956, and one from when they were starting to have responsibilities and were non-formal teachers or path-showers, madrijim, in 1959). He also had a copy of the Hagadá we used in the 1980s when I was at Hashomer; this last one was (sadly?) not done by us as members of Hashomer, but prepared by a larger group between Hashomer Hatzair and the Mexican friends of Israel's associated left-wing party, Mapam. I don't know in which year this last one was prepared and published, but I remember following it in our ceremony.

So I asked him to lend me the three little books, almost leaflets, and scanned them to put online. Of course, there is no formal licensing information in them, much less explicit authorship information, but they are meant to be shared, so I took the liberty of uploading them to the Internet Archive, tagging them as CC-0 licensed. And if you are interested in them, flowing back and forth between Spanish and Hebrew, with many beautiful texts adapted for them from various sources, illustrated by our own with the usual heroic, socialist-inspired style, and lovingly hand-reproduced using the technology adequate for their day... Here they are:

I really enjoyed the time I took scanning and assembling them, reading some passages, imagining ourselves and my parents as youngsters, and remembering the beautiful work we did at such a great organization. I hope this brings the same joy to others as it did to me. Once shomer, always shomer.

7 January 2016

Daniel Pocock: Do you own your phone or does it own you?

Have you started thinking about new year's resolutions for 2016? Back to the gym or giving up sugary drinks? Many new year's resolutions have a health theme. Unless you have a heroin addiction, there may not be anything else in your life that is more addictive and has potentially more impact on your health and quality of life than your mobile phone. Almost every week there is some new report about the negative impact of phone use on rest or leisure time. Children are particularly at risk, and evidence strongly suggests their grades at school are tanking as a consequence.

Can you imagine your life changing for the better if you switched off your mobile phone or left it at home for one day per week in 2016? If you have children, can you think of anything more powerful than the example you set yourself to help them stay in control of their phones? Children have a remarkable ability to emulate the bad habits they observe in their parents.

Are you in control? Turning it off is a powerful act of showing who is in charge. If you feel you can't live without it, then you are putting your life in the hands of the people who expect an immediate answer to their calls, your phone company and the Silicon Valley executives who make all those apps you can't stop using. As security expert Jacob Appelbaum puts it, cell phones are tracking devices that also happen to make phone calls. Isn't that a chilling thought to reflect on the next time you give one as a Christmas gift?

For your health, your children and your bank balance

Not so long ago we were having lunch in a pizza restaurant in Luzern, a picturesque lakeside town at the base of the Swiss Alps. Luzern is a popular first stop for tourists from all around the world. A Korean family came along and sat at the table next to us. After ordering their food, they all immediately took out their mobile devices and sat there in complete silence, the mother and father, a girl of eight and a boy of five, oblivious to the world around them and even each other, tapping and swiping for the next ten minutes until their food arrived. We wanted to say hello to them; I joked that I should beep first, initiating communication with the sound of a text message notification. Is this how all holidays will be in future? Is it how all families will spend time together? Can you imagine your grandchildren and their children sharing a meal like this in the year 2050 or beyond?

Which gadgets does Bond bring to Switzerland?

On Her Majesty's Secret Service is one of the more memorable Bond movies for its spectacular setting in the Swiss Alps, the location now transformed into a mountain-top revolving restaurant visited by thousands of tourists every day, with a comfortable cable car service and hiking trails with breathtaking views that never become boring. Can you imagine Bond leaving behind his gun and his skis and visiting Switzerland with a smartphone instead? Eating a pizza with one hand while using the fingertips of the other to operate an app for making drone strikes on villains, swiping through Tinder for a new girl to replace the one who died (from boredom) in his previous "adventure", and letting his gelati melt while engrossed in a downhill ski or motorcycle game in all the glory of a 5.7" 24-bit colour display? Of course it's absurd. Would you want to live like that yourself? We see more and more of it in people who are supposedly in Switzerland on the trip of a lifetime. Would you tolerate it in a movie?
The mobile phone industry has paid big money to have their technology appear on the silver screen, but audience feedback shows people are frustrated with movies that plaster the contents of text messages across the screen every few minutes; hopefully Bond movies will continue to plaster bullets and blood across the screen instead.

Time for freedom

How would you live for a day or a weekend or an entire holiday without your mobile phone? There are many small frustrations you may experience, but the biggest one, and the indirect cause of many other problems, may be the inability to tell the time. Many people today have stopped wearing a watch, relying instead upon their mobile phone to tell the time. Without either a phone or a watch, frustration is not far away. If you feel apprehension just at the thought of leaving your phone at home, the lack of a watch may be a subconscious factor behind your hesitation.

Trying is better than reading

Many articles and blogs give opinions about how to buy a watch, how much to spend and what you can wear it with. Don't spend a lot of time reading any of it; if you don't know where to start, simply go down to the local high street or mall and try them. Start with the most glamorous and expensive models from Swiss manufacturers, as these are what everything else is compared to, and then perhaps proceed to look more widely. While Swiss brands tend to sell through the stores, vendors on Amazon and eBay now distribute a range of watches from manufacturers in Japan, China and other locations, such as Orient and Invicta, at a fraction of the price of those in the stores. You still need to try a few first to identify your preferred style and case size though. Google can also turn up many options for different budgets.

[Image: Copying or competition? Similarity of Invicta (from Amazon) and Rolex Submariner]

You may not know whether you want a watch that is manually wound, automatically wound or battery operated. Buying a low-cost automatic model online could be a good way to familiarize yourself before buying anything serious. Mechanical watches have a smoother and more elegant second-hand movement and will survive the next Carrington event, but may come to grief around magnets - a brief encounter with a low-cost degausser fixes that.

Is it smart to buy a smart watch?

If you genuinely want to have the feeling of complete freedom and control over technology, you may want to think twice about buying a smart watch. While it may be interesting to own and experiment with it some of the time, being free from your phone means being free from other electronic technology too. If you do go for a smart watch (and there are many valid reasons for trying one some of the time), maybe make it a second (or third) watch. Smart watches are likely to be controversial for some time to come due to their impact in schools (where mobile phones are usually banned) and various privacy factors.

Help those around you achieve phone freedom in 2016

There will be further blogs on this theme during 2016, each looking at the pressures people face with or without the mobile phone. As a developer of communications technology myself, I may surprise you by encouraging people not to use it every waking minute, but working on this technology makes me more conscious of its impact on those around me and society in general. A powerful factor to consider when talking about any communications technology is the presence of peer pressure and the behavior of those around you. Going phone-free may involve helping them to consider taking control too. Helping them out with a new watch as a gift (be careful to seek advice on the style that they are likely to prefer, or ensure the purchase can be exchanged) may be an interesting way to help them engage with the idea, and every time they look at the time, they may also be reminded of your concern for their freedom.

30 November 2015

Mark Brown: Unconscious biases

Matthew Garrett's recent very good response to Eric Raymond's recent post opposing inclusiveness efforts in free software reminded me of something I've been noticing more and more often: a very substantial proportion of the female developers I encounter working on the kernel are from non-European cultures where I (and I expect most people from western cultures) lack familiarity with the gender associations of all but the most common and familiar names. This could be happening for a lot of reasons: it could be better entry paths to kernel development in those cultures (though my experience visiting companies in the relevant countries makes me question that), or it could be that the sample sizes are so regrettably small that this really is just anecdote, but I worry that some of what's going on is that the cultural differences are happening to mask and address some of the unconscious barriers that get thrown up.

20 October 2015

Russ Allbery: Review: The Oathbound

Review: The Oathbound, by Mercedes Lackey
Series: Vows and Honor #1
Publisher: DAW
Copyright: July 1988
ISBN: 0-88677-414-4
Format: Mass market
Pages: 302
This book warrants a bit of explanation. Before Arrows of the Queen, before Valdemar (at least in terms of publication dates), came the Tarma and Kethry short stories. I don't know if they were always intended to be set in the same world as Valdemar; if not, they were quickly included. But they came from another part of the world and a slightly different sub-genre. While the first two Valdemar trilogies were largely coming-of-age fantasy, the Tarma and Kethry stories are itinerant sword-and-sorcery adventures featuring two women with a soul bond: the conventionally attractive, aristocratic mage Kethry, and the celibate, goddess-sworn swordswoman Tarma. Their first story was published, appropriately, in Marion Zimmer Bradley's Swords and Sorceress III.

This is the first book about Tarma and Kethry. It's a fix-up novel: shorter stories, bridged and re-edited, and glued together with some additional material. And it does not contain the first Tarma and Kethry story. As mentioned in my earlier Valdemar reviews, this is a re-read, but it's been something like twenty years since I previously read the whole Valdemar corpus (as it was at the time; I'll probably re-read everything I have on hand, but it's grown considerably, and I may not chase down the rest of it). One of the things I'd forgotten is how oddly, from a novel reader's perspective, the Tarma and Kethry stories were collected. Knowing what I know now about publishing, I assume Swords and Sorceress III was still in print at the time The Oathbound was published, or the rights weren't available for some other reason, so their first story had to be omitted. Whatever the reason, The Oathbound starts with a jarring gap that's no less irritating in this re-read than it was originally. Also as is becoming typical for this series, I remembered a lot more world-building and character development than is actually present in at least this first book. In this case, I strongly suspect most of that characterization is in Oathbreakers, which I remember as being more of a coherent single story and less of a fix-up of puzzle and adventure stories with scant time for character growth. I'll be able to test my memory shortly. What we do get is Kethry's reconciliation of her past, a brief look at the Shin'a'in and the depth of Tarma and Kethry's mutual oath (unfortunately told more than shown), the introduction of Warrl (again, a relationship that will gain a great deal more depth later), and then some typical sword and sorcery episodes: a locked room mystery, a caravan guard adventure about which I'll have more to say later, and two rather unpleasant encounters with a demon. The material is bridged enough that it has a vague novel-like shape, but the bones of the underlying short stories are pretty obvious. One can tell this isn't really a novel even without the tell of a narrative recap in later chapters of events that you'd just read earlier in the same book.

What we also get is rather a lot of rape, and one episode of seriously unpleasant "justice." A drawback of early Lackey is that her villains are pure evil. My not entirely trustworthy memory tells me that this moderates over time, but early stories tend to feature villains completely devoid of redeeming qualities. In this book alone one gets to choose between the rapist pedophile, the rapist lord, the rapist bandit, and the rapist demon who had been doing extensive research in Jack Chalker novels. You'll notice a theme.
Most of the rape happens off camera, but I was still thoroughly sick of it by the end of the book. This was already a cliched motivation tactic when these stories were written. Worse, as with the end of Arrow's Flight, the protagonists don't seem to be above a bit of "turnabout is fair play." When you're dealing with rape as a primary plot motivation, that goes about as badly as you might expect. The final episode here involves a confrontation that Tarma and Kethry brought entirely on themselves through some rather despicable actions, and from which they should have taken a lesson about why civilized societies have criminal justice systems. Unfortunately, despite an ethical priest who is mostly played for mild amusement, no one in the book seems to have drawn that rather obvious conclusion. This, too, I recall as getting better as the series goes along and Lackey matures as a writer, but that only helps marginally with the early books.

Some time after the publication of The Oathbound and Oathbreakers, something (presumably the rights situation) changed. Oathblood was published in 1998 and includes not only the first Tarma and Kethry story but also several of the short stories that make up this book, in (I assume) something closer to their original form. That makes The Oathbound somewhat pointless and entirely skippable. I re-read it first because that's how I first approached the series many years ago, and (to be honest) because I'd forgotten how much was reprinted in Oathblood. I'd advise a new reader to skip it entirely, start with the short stories in Oathblood, and then read Oathbreakers before reading the final novella. You'd miss the demon stories, but that's probably for the best.

I'm complaining a lot about this book, but that's partly from familiarity. If you can stomach the rape and one stunningly unethical protagonist decision, the stories that make it up are solid and enjoyable, and the dynamic between Tarma and Kethry is always a lot of fun (and gets even better when Warrl is added to the mix). I think my favorite was the locked room mystery. It's significantly spoiled by knowing the ending, and it has little deeper significance, but it's a classic sort of unembellished, unapologetic sword-and-sorcery tale that's hard to come by in books. But since it too is reprinted (in a better form) in Oathblood, there's no point in reading it here. Followed by Oathbreakers. Rating: 6 out of 10

6 October 2015

Norbert Preining: Craft Beer Kanazawa 2015

Last weekend the yearly Craft Beer Kanazawa Festival took place in central Kanazawa. This year 14 different producers brought about 80 different kinds of beer for us to taste. Compared with 6 years ago, when I came to Japan and Japan was still more or less Kirin-Asahi-Sapporo country without any distinguishable taste, the situation has improved vastly, and we can now enjoy lots of excellent local beers!
Returning from a trip to Swansea and a conference in Fukuoka, I arrived at Kanazawa train station and went directly to the beer festival. A great welcome back in Kanazawa, but due to excessive sleep deprivation and the feeling of "finally I want to come home", I only enjoyed 6 beers from 6 different producers. In the gardens behind the Shinoki Cultural Center lots of small tents with beer and food were set up. Lots of tables and chairs were also available, but most people enjoyed flocking around in the grass around the tents. What a difference to last year's rainy and cold beer festival!
This year's producers were (in order from left to right, page links according to language): With only 6 beers at my disposal (due to the ticket system), I chose the ones I don't have nearby. Mind that the following comments are purely personal and do not define a quality standard; I just say what I like, from worst to best: A great beer festival, and I am looking forward to next year's festival to try a few more. In the meantime I will stock up on beers at home, so that I always have a good Yoho Brewery beer at hand! Enjoy!

12 August 2015

Daniel Pocock: Recording live events like a pro (part 2: video)

In the first blog in this series, we looked at how to capture audio effectively for a range of different events. While recording audio appears less complicated than video, it remains fundamental to a good recording. For some types of event, like a speech or a debate, you can have quite a bad video with poor lighting and other defects, but people will still be able to watch it if the audio is good. Therefore, if you haven't already looked at the previous blog, please do so now. As mentioned in the earlier blog, many people now have high quality equipment for recording both audio and video and a wide range of opportunities to use it, whether it is a talk at a conference, a wedding or making a Kickstarter video.

The right camera for video recording

The fundamental piece of equipment is the camera itself. You may have a DSLR camera that can record video, or maybe you have a proper handheld video camera. The leading DSLR cameras, combined with a good lens, make higher quality recordings than many handheld video cameras. Unfortunately, although you pay good money to buy a very well engineered DSLR that could easily record many hours of video, most DSLRs are crippled to record a maximum of 30 minutes in one recording. This issue and some workarounds are discussed later in this blog. If you don't have any camera at all you need to think carefully about which type to buy. If you are only going to use it once you may want to consider renting or borrowing, or asking other people attending the event to bring whatever cameras they have to help make multiple recordings (the crowdsourcing solution). If you are a very keen photographer then you will probably have a preference for a DSLR.

Accessories

Don't just look at the cost of your camera and conclude that is all the budget you need. For professional quality video recording, you will almost certainly need some accessories. You may find they are overpriced at the retail store where you bought your camera, but you still need some of them, so have a look online.
Recording a talk at a free software event with a Nikon D800 on a very basic tripod with Rode VideoMic Pro, headphones (white cable) and external power (black cable) If you want to capture audio with the camera and record it in the video file (discussed in more detail below), you will need to purchase a microphone that mounts on the camera. The built-in microphones on cameras are often quite bad, even on the most expensive cameras. If you are just using the built-in microphone for reference audio (to help with time synchronization when you combine different audio files with the video later) then the built-in microphone may be acceptable. Camera audio is discussed in more detail below. If your camera has a headphone socket, get some headphones for listening to the audio. Make sure you have multiple memory cards. Look carefully at the speed of the memory cards; slow ones are cheaper but they can't keep up with the speed of writing 1080p video. At a minimum, you should aim to buy memory cards that can handle one or two days' worth of data for whatever it is you do. A tripod is essential for most types of video. If you use a particularly heavy camera or lens, or if you are outdoors and it may be windy, you will need a heavier tripod for stability. For video, it is very useful to have a tripod with a handle for panning left and right, but if the camera will be stationary for the whole recording then the handle is not essential. Carrying at least one spare battery is another smart move. On one visit to the Inca Trail in Peru, we observed another member of our group hiking up and down the Andes with a brand new DSLR that they couldn't use because the battery was flat. For extended periods of recording, batteries will not be sufficient and you will need to purchase a mains power supply (PSU). These are available for most types of DSLR and video camera. The camera vendors typically design cameras with unusual power sockets so that you can only use a very specific and heavily overpriced PSU from the same company. Don't forget a surge protector too. There are various smartphone apps that allow you to remotely control the camera from the screen of your phone, such as the qDslrDashboard app. These often give a better preview than the screen built into the camera and may even allow you to use the touch screen to focus more quickly on a specific part of the picture. A regular USB cable is not suitable for this type of app; you need to buy a USB On-The-Go (OTG) cable.
Screenshot of qDslrDashboard app on a smartphone, controlling a DSLR camera If you plan to copy the video from the camera to a computer at the event, you will need to make sure you have a fast memory card reader. The memory card readers in some laptops are quite slow and others can be quite fast, so you may not need to buy an extra card reader. Camera audio Most cameras, including DSLRs, have a built-in microphone and a socket for connecting an external microphone. The built-in microphones capture very poor quality sound. For many events, it is much better to have independent microphones, such as a lapel microphone attached to a smartphone or wireless transmitter. Those solutions are described in part one of this blog series. Nonetheless, there are still some benefits of capturing audio in the camera. The biggest benefit is time synchronization: if you have audio recordings in other devices, you will need to align them with the video using post-production software. If the camera recorded an audio stream too, even if the quality is not very good, you can visualize the waveform on screen and use it to align the other audio recordings much more easily and precisely. If the camera will be very close to the people speaking then it may be acceptable to use a microphone mounted on the camera. This will be convenient for post-production because the audio will be synchronized with the video. It may still not be as good as a lapel microphone, but the quality of these camera-mounted microphones is far higher than that of the built-in microphones. I've been trying the Rode VideoMic Pro; it is definitely better than recording with the built-in microphone on the camera and also better than the built-in microphone on my phone. One problem that most people encounter is the sound of the lens autofocus mechanism being detected by the microphone. This occurs with both the built-in microphone and any other microphone you mount on the camera. A microphone mounted on top of the camera doesn't detect this noise with the same intensity as the built-in microphone, but it is still present in the recordings. If using a camera-mounted microphone to capture the audio from an audience, you may need an omnidirectional microphone. Many camera-mounted microphones are directional and will not detect very much sound from the sides or behind the camera. When using any type of external microphone with the camera, it is recommended to disable automatic gain control (AGC) in the camera settings and then manually adjust the microphone sensitivity/volume level. Use headphones A final word on audio - most good cameras have an audio output socket. Connect headphones and wear them, to make sure you are always capturing audio. Otherwise, if the microphone's battery goes flat or if a wireless microphone goes out of range, you may not notice. Choosing a lens The more light you get, the better. Bigger and more expensive lenses allow more light into the camera. Many of the normal lenses sold with a DSLR camera are acceptable, but if it is a special occasion you may want to rent a more expensive lens for the day. If you already have a lens, it is a very good idea to test it in conditions similar to those you expect for the event you want to record.
Recording duration limits Most DSLR cameras with video capability impose a 30 minute maximum recording duration. This is basically the result of a friendly arrangement between movie studios and politicians to charge an extra tax on video recording technology and potentially make the movie studio bosses richer, supposedly justified by the fact that a tiny but exaggerated number of people use their cameras to record movies at the cinema. As a consequence, most DSLR manufacturers limit the duration of video recording so their product won't be subject to the tax, ensuring the retail price is lower and more attractive. On top of this, many DSLR cameras also have a 4GB file size limit if they use the FAT filesystem. Recording 1080p video at a high frame rate may hit the file size limit in 10 minutes, well before you encounter the 30 minute maximum recording duration. To deal with the file size issue, you can record at 720p instead of 1080p and use a frame rate of 24fps. For events longer than 30 minutes or where you really want 1080p or a higher frame rate, there are some other options you can consider (a rough estimate of how quickly these limits bite is sketched after the list):
  • Buy or rent a proper video camera instead of using a DSLR camera.
  • Use multiple cameras that can be stopped and restarted at different times.
  • Manually stop and restart the camera if there are breaks in the event where it is safe to do so.
  • Use an app to control the camera and program it to immediately restart the recording each time it stops.
  • Extract the raw output from the camera's HDMI socket and record it on some other device or computer. There are several purpose-built devices that can be used this way with an embedded SSD for storage.
  • Use one of the unofficial/alternative firmware images that some people distribute, which remove the artificial 30 minute recording limit.
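To get a feel for how quickly these limits bite, here is a minimal back-of-the-envelope sketch in Python. The bitrates are illustrative assumptions only (real figures vary widely with codec and quality setting), not values for any particular camera model.

# Rough estimate of DSLR video recording limits.
# The bitrates used below are assumptions for illustration, not measured values.
FILE_SIZE_LIMIT_GB = 4        # typical limit on FAT-formatted cards
DURATION_LIMIT_MIN = 30       # typical DSLR recording limit

def minutes_per_file(bitrate_mbps):
    """Minutes of footage that fit in one 4GB file at the given bitrate (Mbit/s)."""
    return FILE_SIZE_LIMIT_GB * 1024 * 8 / bitrate_mbps / 60

def card_space_gb(bitrate_mbps, hours):
    """Approximate card space needed for a given number of hours of footage."""
    return bitrate_mbps / 8 * 3600 * hours / 1024

for label, mbps in [("720p/24fps, assumed ~20 Mbit/s", 20),
                    ("1080p/50fps, assumed ~45 Mbit/s", 45)]:
    print("%s: about %.0f min per 4GB file, about %.0f GB for 6h of footage"
          % (label, minutes_per_file(mbps), card_space_gb(mbps, 6)))

At the assumed high-bitrate 1080p setting you hit the 4GB ceiling well before the 30 minute limit, which is why dropping to 720p/24fps helps for long, uninterrupted sessions.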
Camera settings There are many online tutorials and demonstration videos on YouTube that will help you optimize the camera settings for video. You may have already made recordings using the automatic mode. Adjusting some or all of the settings manually may help you create a more optimal recording. You will need to spend some time familiarizing yourself with the settings first. The first thing to check is white balance. This tells the camera the type of lighting in the location. If you set this incorrectly then the colours will be distorted. Many cameras have the ability to automatically set the white balance. For video, you may be able to change one or more settings that control the recording quality. These settings control the file compression ratio and image size. Typical image size settings are 720p and 1080p. Compression ratio may be controlled by a high/medium/low quality setting. Choosing the highest quality and biggest picture requires more space on the memory card and also means you reach the 4GB file size limit more quickly. A higher quality setting also implies a faster memory card is required, because the rate of megabytes per second written to the memory card is higher. Next you need to think about the frame rate. Events that involve fast moving subjects, such as sports, typically benefit from a higher frame rate. For other events it is quite acceptable to use 24 frames per second (fps). Higher frame rates also imply a bigger file size and a requirement for a faster memory card. Once you have decided on the frame rate, the next thing to do is set the shutter speed. Use a shutter speed that is double the frame rate. For example, if using 24fps or 25fps, use a 1/50 shutter speed. The final two settings you need to adjust are the ISO and aperture. Set these based on the lighting conditions and the extent to which the subjects are moving. For example, if the setting is dark or if you are trying to record fast moving subjects like athletes, vehicles or animals, use an ISO value of 800 or higher. Once you have chosen the ISO, adjust the aperture to ensure the picture is sufficiently illuminated. Aperture also has a significant impact on the depth of field. Operating the camera: zoom and focus Many people use zoom lenses. It is not always necessary to change the zoom while recording a video; you can use software to zoom in and out on individual parts of the picture when editing it in post-production. If you do change the zoom while recording, it may be more difficult to maintain focus. Almost all lenses support manual focus (turning the focus ring by hand) and many support automatic focus. When shooting photographs with a DSLR, the mirror is down and the camera can use dedicated sensors for focus and light sensing. When shooting video, the mirror is up and the camera cannot use the same focus sensors that are used in photography. Video recording uses a digital focussing algorithm based on contrast in the picture. If you take a lot of photos you are probably quite accustomed to the fast and precise autofocus for photography and you will notice that keeping a video in focus is more challenging. As mentioned already, one of the first things you can do to keep focus simple is to avoid zooming while recording. Record in a higher resolution than you require and then zoom with software later. Some people record using 4k resolution even when they only want to produce a 720p video, as they can digitally zoom in to different parts of the 4k recording without losing detail.
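Going back to the camera settings above, the shutter speed rule and the memory card requirement are both simple arithmetic, so a short sketch may help. This is only an illustration; the 45 Mbit/s figure is an assumed high-quality 1080p bitrate, not a value taken from any specific camera.

from fractions import Fraction

def recommended_shutter(frame_rate):
    """Rule of thumb from the text: shutter speed of roughly 1/(2 * frame rate).
    Cameras only offer discrete speeds, so 1/48 becomes 1/50 in practice."""
    return Fraction(1, 2 * frame_rate)

def min_card_write_mb_s(bitrate_mbps):
    """Sustained write speed in MB/s that the memory card must handle."""
    return bitrate_mbps / 8.0

for fps in (24, 25, 50, 60):
    print("%d fps -> shutter speed around 1/%d"
          % (fps, recommended_shutter(fps).denominator))

print("an assumed 45 Mbit/s setting needs a card sustaining about %.1f MB/s"
      % min_card_write_mb_s(45))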
If the subject is very stationary (people sitting at a desk for an interview is a typical example), you may be able to set the camera to manual focus and not change it at all while recording. If you choose to enable autofocus while recording, any built-in camera microphone or microphone mounted on the camera is likely to detect sounds from the motorized focus system. Ultimately, the autofocus mechanism is not accurate for all subjects and you may be unable to stop them moving around, so you will need to change the focus manually while recording. It requires some practice to be able to do this quickly without overshooting the right focus. To make life more tricky, Nikon and Canon focus rings rotate in opposite directions, so if you are proficient using one brand you may feel awkward if you ever have to use the other. A good way to build this skill is to practice while in the car or on the train, pointing at different subjects outside the window and trying to stay in focus as you move from one subject to the next. Make a trial run Many events, from weddings right up to the Olympic Games opening ceremony, have a trial run the day before. One reason for that is to test the locations and settings of all the recording and broadcasting equipment. If a trial run isn't possible for your event, you may find some similar event to practice recording and test your equipment. For example, if you are planning to record a wedding, you could try and record a Sunday mass in the same church. Backup and duplicate the recordings before leaving the event If you only have one copy of the recordings and the equipment is stolen or damaged, you may be very disappointed. Before your event, make a plan to duplicate the raw audio and video recordings so that several people can take copies away with them. Decide in advance who will be responsible for this, ensure there will be several portable hard disks, and estimate how much time it will take to prepare the copies and factor this into the schedule. Conclusion All the products described can be easily purchased from online retailers. You may not need every accessory that is mentioned, as it depends on the type of event you record. The total cost of buying or renting the necessary accessories may be as much as the cost of the camera itself, so if you are new to this you may need to think carefully about making a budget with a spreadsheet to do it correctly. Becoming familiar with the camera controls and practicing the techniques for manual focus and zoom can take weeks or months. If you enjoy photography this can be time well spent, but if you don't enjoy it then you may not want to commit the time necessary to make good quality video. Don't rule out options like renting equipment instead of buying it, or crowdsourcing, asking several participants or friends to help make recordings with their own equipment. For many events, audio is far more important than video and, as emphasized at the beginning of this article, you should be one hundred percent confident in your strategy for recording audio before you start planning to record video.

7 March 2015

Holger Levsen: 20150307-reproducible-video

Video from the reproducible builds talk at FOSDEM 2015 and testing experimental The video from Lunar's and my talk Stretching out for trustworthy reproducible builds (follow this link if you're interested in the slides) given at the recent FOSDEM is finally available for download at both the FOSDEM video archive as well as the Debian video archive. (Update: the Debian video archive has a hires version now; I suspect the FOSDEM archive will follow soon.) To encourage maintainers to upload fixes to experimental, and thanks to lots of work from Mattia Rizzolo, we are now also testing packages in experimental for reproducibility! And even better: behind the scenes the site is prepared for stretch! Should you have wondered about the decrease in sid since we gave the talk in early February: this is because we've added some more variations to the second build. Most notably, building with a timezone difference of 25h led to the decrease in the graphs showing the current prospects of reproducible builds... More work and joy to come!

18 November 2014

Antoine Beaupré: bup vs attic silly benchmark

After seeing attic introduced in a discussion about bup, I figured I could give it a try. It was answering two of my biggest concerns with bup, and it seemed to appear magically out of nowhere and basically do everything I need, with an inline manual on top of it.

disclaimer Note: this is not a real benchmark! I would probably need to port bup and attic to liw's seivot software to report on this properly (and that would be amazing and really interesting, but it's late now). Even worse, this was done on a production server with other stuff going on, so take the results with a grain of salt.

procedure and results Here's what I did. I set up backups of my ridiculously huge ~/src directory on the external hard drive where I usually make my backups. I ran a clean backup with attic, then redid it, then I ran a similar backup with bup, then redid it. Here are the results:
anarcat@marcos:~$ sudo apt-get install attic # this installed 0.13 on debian jessie amd64
[...]
anarcat@marcos:~$ attic init /mnt/attic-test:
Initializing repository at "/media/anarcat/calyx/attic-test"
Encryption NOT enabled.
Use the "--encryption=passphrase|keyfile" to enable encryption.
anarcat@marcos:~$ time attic create --stats /mnt/attic-test::src ~/src/
Initializing cache...
------------------------------------------------------------------------------
Archive name: src
Archive fingerprint: 7bdcea8a101dc233d7c122e3f69e67e5b03dbb62596d0b70f5b0759d446d9ed0
Start time: Tue Nov 18 00:42:52 2014
End time: Tue Nov 18 00:54:00 2014
Duration: 11 minutes 8.26 seconds
Number of files: 283910
                       Original size      Compressed size    Deduplicated size
This archive:                6.74 GB              4.27 GB              2.99 GB
All archives:                6.74 GB              4.27 GB              2.99 GB
------------------------------------------------------------------------------
311.60user 68.28system 11:08.49elapsed 56%CPU (0avgtext+0avgdata 122824maxresident)k
15279400inputs+6788816outputs (0major+3258848minor)pagefaults 0swaps
anarcat@marcos:~$ time attic create --stats /mnt/attic-test::src-2014-11-18 ~/src/
------------------------------------------------------------------------------
Archive name: src-2014-11-18
Archive fingerprint: be840f1a49b1deb76aea1cb667d812511943cfb7fee67f0dddc57368bd61c4bf
Start time: Tue Nov 18 00:05:57 2014
End time: Tue Nov 18 00:06:35 2014
Duration: 38.15 seconds
Number of files: 283910
                       Original size      Compressed size    Deduplicated size
This archive:                6.74 GB              4.27 GB            116.63 kB
All archives:               13.47 GB              8.54 GB              3.00 GB
------------------------------------------------------------------------------
30.60user 4.66system 0:38.38elapsed 91%CPU (0avgtext+0avgdata 104688maxresident)k
18264inputs+258696outputs (0major+36892minor)pagefaults 0swaps
anarcat@marcos:~$ sudo apt-get install bup # this installed bup 0.25
anarcat@marcos:~$ free && sync && echo 3 | sudo tee /proc/sys/vm/drop_caches && free # flush caches
anarcat@marcos:~$ export BUP_DIR=/mnt/bup-test
anarcat@marcos:~$ bup init
Dépôt Git vide initialisé dans /mnt/bup-test/
anarcat@marcos:~$ time bup index ~/src
Indexing: 345249, done.
56.57user 14.37system 1:45.29elapsed 67%CPU (0avgtext+0avgdata 85236maxresident)k
699920inputs+104624outputs (4major+25970minor)pagefaults 0swaps
anarcat@marcos:~$ time bup save -n src ~/src
Reading index: 345249, done.
bloom: creating from 1 file (200000 objects).
bloom: adding 1 file (200000 objects).
bloom: creating from 3 files (600000 objects).
Saving: 100.00% (6749592/6749592k, 345249/345249 files), done.
bloom: adding 1 file (126005 objects).
383.08user 61.37system 10:52.68elapsed 68%CPU (0avgtext+0avgdata 194256maxresident)k
14638104inputs+5944384outputs (50major+299868minor)pagefaults 0swaps
anarcat@marcos:attic$ time bup index ~/src
Indexing: 345249, done.
56.13user 13.08system 1:38.65elapsed 70%CPU (0avgtext+0avgdata 133848maxresident)k
806144inputs+104824outputs (137major+38463minor)pagefaults 0swaps
anarcat@marcos:attic$ time bup save -n src2 ~/src
Reading index: 1, done.
Saving: 100.00% (0/0k, 1/1 files), done.
bloom: adding 1 file (1 object).
0.22user 0.05system 0:00.66elapsed 42%CPU (0avgtext+0avgdata 17088maxresident)k
10088inputs+88outputs (39major+15194minor)pagefaults 0swaps
Disk usage is comparable:
anarcat@marcos:attic$ du -sc /mnt/*attic*
2943532K        /mnt/attic-test
2969544K        /mnt/bup-test
People are encouraged to try and reproduce those results, which should be fairly trivial.

Observations Here are interesting things I noted while working with both tools:
  • attic is Python 3: I could compile it, with dependencies, by doing apt-get build-dep attic and running setup.py - I could also install it with pip if I needed to (but I didn't)
  • bup is Python 2, and has a scary makefile
  • both have an init command that basically does almost nothing and takes little enough time that I'm ignoring it in the benchmarks
  • attic backups are a single command, while bup requires me to know that I first want to index and then save, which is a little confusing
  • bup has nice progress information, especially during save (because when it loaded the index, it knew how much was remaining) - just because of that, bup "feels" faster
  • bup, however, lets me know about its deep internals (like now I know it uses a bloom filter), which is probably barely understandable by most people
  • on the contrary, attic gives me useful information about the size of my backups, including the size of the current increment
  • it is not possible to get that information from bup, even after the fact - you need to run du before and after the backup (see the sketch after this list)
  • attic modifies the files' access times when backing up, while bup is more careful (there's a pull request to fix this in attic, which is how I found out about this)
  • both backup systems seem to produce roughly the same data size from the same input
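Since bup does not report the size of the current increment, here is a minimal sketch of one way to approximate it by measuring the repository size before and after a run (roughly what running du before and after does). The ~/src path and archive name simply mirror the benchmark above; adjust them for your own setup.

import os
import subprocess

def tree_size(path):
    """Total size in bytes of all files under path, roughly what `du -s` reports."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

bup_dir = os.environ.get("BUP_DIR", os.path.expanduser("~/.bup"))
src = os.path.expanduser("~/src")

before = tree_size(bup_dir)
subprocess.check_call(["bup", "index", src])
subprocess.check_call(["bup", "save", "-n", "src", src])
after = tree_size(bup_dir)

print("this run added roughly %.2f MiB to the repository" % ((after - before) / 1024.0 / 1024.0))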

Summary attic and bup are about equally fast. bup took 30 seconds less than attic to save the files, but that's not counting the 1m45s it took indexing them, so on the total run time, bup was actually slower. attic is also (almost) two times faster on the second run. But this could be within the margin of error of this very quick experiment, so my provisional verdict for now would be that they are about as fast. bup may be more robust (for example it doesn't modify the atimes), but this has not been extensively tested and is based more on my familiarity with the "conservatism" of the bup team than on actual tests. Considering all the features promised by attic, it makes for a really serious contender to the already amazing bup.

Next steps To do this properly, we would need to:
  • include other software (thinking of Zbackup, Burp, ddar, obnam, rdiff-backup and duplicity)
  • bench attic with the noatime patch
  • bench dev attic vs dev bup
  • bench data removal
  • bench encryption
  • test data recovery
  • run multiple backup runs, on different datasets, on a cleaner environment
  • ideally, extend seivot to do all of that
Note that the Burp author already did an impressive comparative benchmark of a bunch of those tools for the burp2 design paper, but it unfortunately doesn't include attic or clear ways to reproduce the results.

7 May 2014

Julien Danjou: Making of The Hacker's Guide to Python

As promised, today I would like to write a bit about the making of The Hacker's Guide to Python. It has been a very interesting experiment, and I think it is worth sharing with you. The inspiration It all started at the beginning of August 2013. I was spending my summer, as the rest of the year, hacking on OpenStack. As years passed, I got more and more deeply involved in the various tools that we either built or contributed to within the OpenStack community. And I somehow got the feeling that my experience with Python, the way we used it inside OpenStack and other applications during these last years, was worth sharing. Worth writing something bigger than a few blog posts. The OpenStack project does code reviews, and therefore so did I for almost two years. That inspired a lot of topics, like the definitive guide to method decorators that I wrote at the time I started the hacker's guide. Stumbling upon the same mistakes or misunderstandings over and over is, somehow, inspiring. I also stumbled upon Nathan Barry's blog and book Authority, which were very helpful for getting started and provided some sort of guideline. All of that brought me enough ideas to start writing a book about Python software development for people already familiar with the language. The writing The first thing I did was to list all the topics I wanted to write about. The list turned out to have subjects that were of no direct interest for a practical guide. For example, on one hand, very few developers know in detail how metaclasses work, but on the other hand, I never had to write a metaclass during these last years. That's the kind of subject I decided not to write about, so I dropped all subjects that I felt were not going to help my readers be more productive, even if they could be technically interesting.
Then, I gathered all the problems I had seen during the code reviews I did during these last two years. Some of them I only recalled in the days following the beginning of the project. But I kept adding them to the table of contents, reorganizing things as needed. After a couple of weeks, I had a pretty good overview of the contents I was going to write about. All I had to do was to fill in the blanks (that sounds so simple now). The entire writing of the book took a hundred hours, spread from August to November, during my spare time. I had to stop all my other side projects for that. The interviews While writing the book, I tried to parallelize everything I could. That included asking people for interviews to be included in the book. I already had a pretty good list of the people I wanted to feature in the book, so I took some time as soon as possible to ask them, and sent them detailed questions. I discovered two categories of interviewees. Some of them were very fast to answer (within a week), and others were much, much slower. A couple of them even set up Git repositories to answer the questions, because that probably looked like an entire project to them. :-) So I had to keep an eye on things and kindly ask from time to time if everything was all right, and at some point I started to kindly set some deadlines. In the end, the quality of the answers was awesome, and I like to think that was because I picked the right people! The proof-reading Once the book was finished, I somehow needed to have people proof-read it. This was probably the hardest part of this experiment. I needed two different types of review: technical reviews, to check that the content was correct and interesting, and a language review. The latter is even more important since English is not my native language. Finding technical reviewers seemed easy at first, as I had a ton of contacts that I identified as being able to review the book. I started by asking a few people if they would be comfortable reading a single chapter and giving me feedback. I started to do that in September: having the writing and the reviews done in parallel was important to me in order to minimize latency and the book's release delay. All the people I contacted answered positively that they would be interested in doing a technical review of a chapter. So I started to send chapters to them. But in the end, only 20% replied. And even after that, a large portion stopped reviewing after a couple of chapters. Don't get me wrong: you can't be mad at people for not wanting to spend their spare time on book editing the way you do. However, from the few people that gave their time to review a few chapters, I got tremendous feedback, at all levels. That was very important and helped a lot with staying confident. Writing a book alone for months without anyone looking over your shoulder can make you doubt that you are creating something worthwhile. As for the English proof-reading, I went ahead and used ODesk to recruit a professional proof-reader. I looked for people with the right skills: a good level of English (at least a native English speaker), the ability to understand what the book was about, and the ability to work within reasonable deadlines. I had mixed results from the people I hired, but I guess that's normal. The only error I made was not parallelizing those reviews enough, so I probably lost a couple of months on that. The toolchain
While writing the book, I took a few breaks to build a toolchain. What I call a toolchain is the set of tools used to render the final PDF, EPUB and MOBI files of the guide. After some research, I decided to settle on AsciiDoc, using the DocBook output, which is then transformed to LaTeX and then to PDF, or to EPUB directly. I rely on Calibre to convert the EPUB file to MOBI. It took me a few hours to do what I wanted, using some magic LaTeX tricks to get a proper render, but it was worth it and I'm particularly happy with the result. For the cover design, I asked my talented friend Nicolas to do something for me, and he designed the wonderful cover and its little snake! The publishing Publishing is an interesting topic people kept asking me about. This is what I had to answer a few dozen times: I never had any plan to ask a publisher to publish this book. Nowadays, asking a publisher to publish a book feels to me like asking a major company to publish a CD. It feels awkward. However, don't get me wrong: there can be a few upsides to having a publisher. They will find reviewers and review your book for you. Having the book review handled for you is probably a very good thing, considering how hard it was for me to get that in place. It can be especially important for a technical book. Also, your book may end up in brick and mortar stores and be part of a collection, both improving visibility. That may improve your book's sales, though the publisher and all the intermediaries are going to keep the largest share of the money anyway. I've heard good stories about people using Gumroad to sell electronic content, so after looking at competitors in that market, I picked them. I also had the idea to sell the book for Bitcoin, so I settled on Coinbase, because they have a nice API to do that. Setting everything up was quite straightforward, especially with Gumroad. It only took me a few hours to do so. Writing the Coinbase application took a few hours too. My initial plan was to only sell an electronic version online. On the other hand, since I kept hearing that a printed version should exist, I decided to give it a try. I chose to work with Lulu because I knew people using it, and it was pretty simple to set up. The launch Once I had everything ready, I built the selling page and connected everything between Mailchimp, Gumroad, Coinbase, Google Analytics, etc. Writing the launch email was really exciting. I used a Mailchimp feature to send the launch mail in several batches, just to have some margin in case of a sudden last-minute problem. But everything went fine. Hurrah! I distributed around 200 copies of the ebook in the first 48 hours, for about $5000. That covered all the costs I had from writing the book, and more, so I was already pretty happy with the launch.
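Going back to the toolchain described earlier in this post (AsciiDoc to DocBook to LaTeX/PDF, EPUB, then MOBI via Calibre), here is a hedged sketch of what such a pipeline can look like when driven from Python. The file names are hypothetical and the exact commands are assumptions about a typical asciidoc/dblatex/Calibre setup, not the scripts actually used for the book.

import subprocess

def run(cmd):
    # Echo the command, then run it, failing loudly if any step breaks.
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

SRC = "guide.txt"  # hypothetical AsciiDoc source file

run(["asciidoc", "-b", "docbook", "-o", "guide.xml", SRC])  # AsciiDoc -> DocBook
run(["dblatex", "guide.xml"])                               # DocBook -> LaTeX -> PDF
run(["a2x", "-f", "epub", SRC])                             # AsciiDoc -> EPUB
run(["ebook-convert", "guide.epub", "guide.mobi"])          # EPUB -> MOBI via Calibre

In practice a Makefile would do the same job; the point is simply that each target format is one conversion step away from the single AsciiDoc source.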
Retrospective In retrospect, something that I didn't do as well as I could have was building a solid mailing list of interested people, and building anticipation and an incentive to buy the book on launch day. My mailing list counted around 1500 people who subscribed because they were interested in the launch of the book; in the end, probably only 10-15% of them bought the book during the launch, which is probably a bit lower than what I could have expected. But more than a month later, I have distributed in total almost 500 copies of the book (including physical units) for more than $10000, so I tend to think that this was a success. I still sell a few copies of the book each week, but the numbers are small compared to the launch. I sold fewer than 10 copies of the ebook using Bitcoin, and I admit I'm a bit disappointed and surprised about that. Physical copies represent 10% of the book's distribution. That's probably a lot lower than what most of the people who pushed me to do it thought it would be, but it is still higher than what I thought it would be. So I would still advise having a paperback version of your book, if only because it's nice to have it in your library.
I only got positive feedback, a few typo notices, and absolutely no refund requests, which I find really amazing. The good news is also that I've been contacted by a couple of Korean and Chinese publishers about getting the book translated and published in those countries. If everything goes well, the book should be translated in the upcoming months and be available in those markets in 2015! If you didn't get a copy yet, there's still time to do so!

1 April 2014

Thorsten Glaser: Sorry about the MediaWiki-related breakage in wheezy

I would like to publicly apologise for the inconvenience caused by my recent updates to the mediawiki and mediawiki-extensions source packages in Debian wheezy (stable-security). As for the reasons: I'm doing MediaWiki-related work at my dayjob, as part of FusionForge/Evolvis development, and I try to upstream as much as I can. Our production environment is a Debian wheezy-based system with a selection of newer packages, including MediaWiki from sid (although I also have a test system running sid, so my uploads to Debian are generally better tested). I hadn't had experience with stable-security uploads before, and made small oversights (and did not run the full series of tests on the final, permitted-to-upload version, only beforehand) which led to the problems. The situation was a bit complicated by the need to update the two packages in lockstep, to fight an RC bug (a file/symlink conflict) which was hard enough to do in sid already, plus the desire to fix other possibly-RC bugs at the same time. I also got no external review, although I cannot blame anyone since I never asked anyone explicitly, so I accept this as my fault. The issues with the updates are: my unfamiliarity with some of the packaging concepts used here, combined with this being something I do during $dayjob (which can limit the time I can invest, although I'm doing much more work on MediaWiki in Debian than I thought I could do on the job), can cause some of those oversights. I guess I should also install a vanilla wheezy environment somewhere for testing. I do not normally do stable uploads (jmw did them before), so I was not prepared for that. And, while here: thanks to the Debian Security Team for putting up with me (also in this week's FusionForge issue), and thanks to MediaWiki upstream for agreeing to support releases shipped in Debian stable for longer, so we can easily do stable-security updates.

30 March 2014

Olivier Berger: Working on a TWiki to MediaWiki converter (targetting FusionForge wikis)

I'm currently working on a wiki converter allowing me to transfer old TWiki wikis (hosted on picoforge) to MediaWikis hosted on FusionForge. Unlike the existing tools that I've found that more or less target the same needs, mine will address two peculiarities. The tool is written in Python, and will include my own crappy wiki syntax converter in Python, instead of spawning existing Perl scripts, as others did. It may happen to work for FosWiki too, but I don't intend to use it beyond our old TWiki installations, for a start. Stay tuned for more progress updates. Edit: I've now released a first version.

7 March 2014

Keith Packard: glamor-hacking

Brief Glamor Hacks Eric Anholt started writing Glamor a few years ago. The goal was to provide credible 2D acceleration based solely on the OpenGL API, in particular, to implement the X drawing primitives, both core and Render extension, without any GPU-specific code. When he started, the thinking was that fixed-function devices were still relevant, so that original code didn't insist upon modern OpenGL features like GLSL shaders. That made the code less efficient and hard to write. Glamor used to be a side-project within the X world, seen as something that really wasn't very useful; something that any credible 2D driver would replace with custom highly-optimized GPU-specific code. Eric and I both hoped that Glamor would turn into something credible and that we'd be able to eliminate all of the horror-show GPU-specific code in every driver for drawing X text, rectangles and composited images. That hadn't happened though, until now. Fast forward to the last six months. Eric has spent a bunch of time cleaning up Glamor internals, and in fact he's had it merged into the core X server for version 1.16 which will be coming up this July. Within the Glamor code base, he's been cleaning some internal structures up and making life more tolerable for Glamor developers. Using libepoxy A big part of the cleanup was transitioning all of the extension function calls to use his other new project, libepoxy, which provides a sane, consistent and performant API to OpenGL extensions for Linux, Mac OS and Windows. That library is awesome, and you should use it for everything you do with OpenGL because not using it is like pounding nails into your head. Or eating non-tasty pie. Using VBOs in Glamor One thing he recently cleaned up was how to deal with VBOs during X operations. VBOs are absolutely essential to modern OpenGL applications; they're really the only way to efficiently pass vertex data from application to the GPU. And, every other mechanism is deprecated by the ARB as not part of the blessed "core context". Glamor provides a simple way of getting some VBO space, dumping data into it, and then using it through two wrapping calls which you use along with glVertexAttribPointer as follows:
pointer = glamor_get_vbo_space(screen, size, &offset);
glVertexAttribPointer(attribute_location, count, type,
              GL_FALSE, stride, offset);
memcpy(pointer, data, size);
glamor_put_vbo_space(screen);
glamor_get_vbo_space allocates the specified amount of VBO space and returns a pointer to that along with an offset, which is suitable to pass to glVertexAttribPointer. You dump your data into the returned pointer, call glamor_put_vbo_space and you're all done. Actually Drawing Stuff At the same time, Eric has been optimizing some of the existing rendering code. But, all of it is still frankly terrible. Our dream of credible 2D graphics through OpenGL just wasn't being realized at all. On Monday, I decided that I should go play in Glamor for a few days, both to hack up some simple rendering paths and to familiarize myself with the insides of Glamor as I'm getting asked to review piles of patches for it, and not understanding a code base is a great way to help introduce serious bugs during review. I started with the core text operations. Not because they're particularly relevant these days, as most applications draw text with the Render extension to provide better looking results, but instead because they're often one of the hardest things to do efficiently with a heavy weight GPU interface, and OpenGL can be amazingly heavy weight if you let it. Eric spent a bunch of time optimizing the existing text code to try and make it faster, but at the bottom, it actually draws each lit pixel as a tiny GL_POINT object by sending a separate x/y vertex value to the GPU (using the above VBO interface). This code walks the array of bits in the font, checking each one to see if it is lit, then checking if the lit pixel is within the clip region and only then adding the coordinates of the lit pixel to the VBO. The amazing thing is that even with all of this CPU and GPU work, the venerable 6x13 font is drawn at an astonishing 3.2 million glyphs per second. Of course, pure software draws text at 9.3 million glyphs per second. I suspected that a more efficient implementation might be able to draw text a bit faster, so I decided to just start from scratch with a new GL-based core X text drawing function. The plan was pretty simple:
  1. Dump all glyphs in the font into a texture. Store them in 1bpp format to minimize memory consumption.
  2. Place raw (integer) glyph coordinates into the VBO. Place four coordinates for each and draw a GL_QUAD for each glyph.
  3. Transform the glyph coordinates into the usual GL range (-1..1) in the vertex shader.
  4. Fetch a suitable byte from the glyph texture, extract a single bit and then either draw a solid color or discard the fragment.
This makes the X server code surprisingly simple; it computes integer coordinates for the glyph destination and glyph image source and writes those to the VBO. When all of the glyphs are in the VBO, it just calls glDrawArrays(GL_QUADS, 0, 4 * count). The results were encouraging:
1: fb-text.perf
2: glamor-text.perf
3: keith-text.perf
       1                 2                           3                 Operation
------------   -------------------------   -------------------------   -------------------------
   9300000.0      3160000.0 (     0.340)     18000000.0 (     1.935)   Char in 80-char line (6x13) 
   8700000.0      2850000.0 (     0.328)     16500000.0 (     1.897)   Char in 70-char line (8x13) 
   6560000.0      2380000.0 (     0.363)     11900000.0 (     1.814)   Char in 60-char line (9x15) 
   2150000.0       700000.0 (     0.326)      7710000.0 (     3.586)   Char16 in 40-char line (k14) 
    894000.0       283000.0 (     0.317)      4500000.0 (     5.034)   Char16 in 23-char line (k24) 
   9170000.0      4400000.0 (     0.480)     17300000.0 (     1.887)   Char in 80-char line (TR 10) 
   3080000.0      1090000.0 (     0.354)      7810000.0 (     2.536)   Char in 30-char line (TR 24) 
   6690000.0      2640000.0 (     0.395)      5180000.0 (     0.774)   Char in 20/40/20 line (6x13, TR 10) 
   1160000.0       351000.0 (     0.303)      2080000.0 (     1.793)   Char16 in 7/14/7 line (k14, k24) 
   8310000.0      2880000.0 (     0.347)     15600000.0 (     1.877)   Char in 80-char image line (6x13) 
   7510000.0      2550000.0 (     0.340)     12900000.0 (     1.718)   Char in 70-char image line (8x13) 
   5650000.0      2090000.0 (     0.370)     11400000.0 (     2.018)   Char in 60-char image line (9x15) 
   2000000.0       670000.0 (     0.335)      7780000.0 (     3.890)   Char16 in 40-char image line (k14) 
    823000.0       270000.0 (     0.328)      4470000.0 (     5.431)   Char16 in 23-char image line (k24) 
   8500000.0      3710000.0 (     0.436)      8250000.0 (     0.971)   Char in 80-char image line (TR 10) 
   2620000.0       983000.0 (     0.375)      3650000.0 (     1.393)   Char in 30-char image line (TR 24)
This is our old friend x11perfcomp, but slightly adjusted for a modern reality where you really do end up drawing billions of objects (hence the wider columns). This table lists the performance for drawing a range of different fonts in both poly text and image text variants. The first column is for Xephyr using software (fb) rendering, the second is for the existing Glamor GL_POINT based code and the third is the latest GL_QUAD based code. As you can see, drawing points for every lit pixel in a glyph is surprisingly fast, but only about 1/3 the speed of software for essentially any size glyph. By minimizing the use of the CPU and pushing piles of work into the GPU, we manage to increase the speed of most of the operations, with larger glyphs improving significantly more than smaller glyphs. Now, you ask how much code this involved. And, I can truthfully say that it was a very small amount to write:
 Makefile.am             2 
 glamor.c                5 
 glamor_core.c           8 
 glamor_font.c         181 ++++++++++++++++++++
 glamor_font.h          50 +++++
 glamor_priv.h          26 ++
 glamor_text.c         472 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 glamor_transform.c      2 
 8 files changed, 741 insertions(+), 5 deletions(-)
Let's Start At The Very Beginning The results of optimizing text encouraged me to start at the top of x11perf and see what progress I could make. In particular, looking at the current Glamor code, I noticed that it did all of the vertex transformation with the CPU. That makes no sense at all for any GPU built in the last decade; they've got massive numbers of transistors dedicated to performing precisely this kind of operation. So, I decided to see what I could do with PolyPoint. PolyPoint is absolutely brutal on any GPU; you have to pass it two coordinates for each pixel, and so the very best you can do is send it 32 bits, or precisely the same amount of data needed to actually draw a pixel on the frame buffer. With this in mind, one expects that about the best you can do compared with software is to tie. Of course, the CPU version is actually computing an address and clipping, but those are all easily buried in the cost of actually storing a pixel. In any case, the results of this little exercise are pretty close to a tie: the CPU draws 190,000,000 dots per second and the GPU draws 189,000,000 dots per second. Looking at the vertex and fragment shaders generated by the compiler, it's clear that there's room for improvement. The fragment shader is simply pulling the constant pixel color from a uniform and assigning it to the fragment color in this, the simplest of all possible shaders:
uniform vec4 color;
void main()
{
       gl_FragColor = color;
}
This generates five instructions:
Native code for point fragment shader 7 (SIMD8 dispatch):
   START B0
   FB write target 0
0x00000000: mov(8)          g113<1>F        g2<0,1,0>F                      { align1 WE_normal 1Q };
0x00000010: mov(8)          g114<1>F        g2.1<0,1,0>F                    { align1 WE_normal 1Q };
0x00000020: mov(8)          g115<1>F        g2.2<0,1,0>F                    { align1 WE_normal 1Q };
0x00000030: mov(8)          g116<1>F        g2.3<0,1,0>F                    { align1 WE_normal 1Q };
0x00000040: sendc(8)        null            g113<8,8,1>F
                render ( RT write, 0, 4, 12) mlen 4 rlen 0      { align1 WE_normal 1Q EOT };
   END B0
As this pattern is actually pretty common, it turns out there's a single instruction that can replace all four of the moves. That should actually make a significant difference in the run time of this shader, and this shader runs once for every single pixel. The vertex shader has some similar optimization opportunities, but it only runs once for every 8 pixels: with the SIMD format flipped around, the vertex shader can compute 8 vertices in parallel, so it ends up executing 8 times less often. It's got some redundant moves, which could be optimized by improving the copy propagation analysis code in the compiler. Of course, improving the compiler to make these cases run faster will probably make a lot of other applications run faster too, so it's probably worth doing at some point. Again, the amount of code necessary to add this path was tiny:
 Makefile.am             1 
 glamor.c                2 
 glamor_polyops.c      116 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 glamor_priv.h           8 +++
 glamor_transform.c    118 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 glamor_transform.h     51 ++++++++++++++++++++++
 6 files changed, 292 insertions(+), 4 deletions(-)
Discussion of Results These two cases, text and points, are probably the hardest operations to accelerate with a GPU, and yet a small amount of OpenGL code was able to meet or beat software easily. The advantage of this work over traditional GPU 2D acceleration implementations should be pretty clear: this same code should work well on any GPU which offers a reasonable OpenGL implementation. That means everyone shares the benefits of this code, and everyone can contribute to making 2D operations faster. All of these measurements were actually done using Xephyr, which offers a testing environment unlike any I've ever had: build and test hardware acceleration code within a nested X server, debugging it in a windowed environment on a single machine. Here's how I'm running it:
$ ./Xephyr  -glamor :1 -schedMax 2000 -screen 1024x768 -retro
The one bit of magic here is the -schedMax 2000 flag, which causes Xephyr to update the screen less often when applications are very busy and serves to reduce the overhead of screen updates while running x11perf. Future Work Having managed to accelerate 17 of the 392 operations in x11perf, it's pretty clear that I could spend a bunch of time just stepping through each of the remaining ones and working on them. Before doing that, we want to try and work out some general principles about how to handle core X fill styles. Moving all of the stipple and tile computation to the GPU will help reduce the amount of code necessary to fill rectangles and spans, along with improving performance, assuming the above exercise generalizes to other primitives. Getting and Testing the Code Most of the changes here are from Eric's glamor-server branch:
git://people.freedesktop.org/~anholt/xserver glamor-server
The two patches shown above, along with a pair of simple clean-up patches that I've written this week, are available here:
git://people.freedesktop.org/~keithp/xserver glamor-server
Of course, as this now uses libepoxy, you'll need to fetch, build and install that before trying to compile this X server. Because you can try all of this out in Xephyr, it's easy to download and build this X server and then run it right on top of your current driver stack inside of X. I'd really like to hear from people with Radeon or nVidia hardware to know whether the code works, and how it compares with fb on the same machine, which you get when you elide the -glamor argument from the example Xephyr command line above.

27 February 2014

Simon Josefsson: Replicant 4.2 on Samsung S3

Since November 2013 I have been using Replicant on my Samsung S3 as an alternative OS. The experience has been good for everyday use. The limits (due to non-free software components) compared to a normal S3 (running the vendor ROM or CyanogenMod) are the lack of GPS/wifi/bluetooth/NFC/frontcamera functionality, although it is easy to get some of that working again, including GPS, which is nice for my geocaching hobby. The Replicant software is stable for being an Android platform; better than my Nexus 7 (2nd generation) tablet, which I got around the same time and which runs an unmodified version of Android. The S3 has crashed around ten times in these four months. I've lost track of the number of N7 crashes, especially after the upgrade to Android 4.4. I use the N7 significantly less than the S3, reinforcing my impression that Replicant is a stable Android. I have not had any other problem that I couldn't explain, and have rarely had to reboot the device. The Replicant project recently released version 4.2 and while I don't expect the release to resolve any problem for me, I decided it was time to upgrade and learn something new. I initially tried the official ROM images, and later migrated to using my own build of the software (for no particular reason other than that I could). Before the installation, I wanted to have a full backup of the phone to avoid losing data. I use SMS Backup+ to keep a backup of my call log, SMS and MMS on my own IMAP server. I use oandbackup to take a backup of all software and settings on the phone. I use DAVDroid for my contacts and calendar (using a Radicale server), and reluctantly still use aCal in order to access my Google Calendar (because Google does not implement RFC 5397 properly, so it doesn't work with DAVDroid). Alas, all that software is not sufficient for backup purposes; for example, photos are still not copied elsewhere. In order to have a complete backup of the phone, I'm using rsync over the Android debug bridge (adb). More precisely, I connect the phone using a USB cable, push a rsyncd configuration file, start the rsync daemon on the phone, forward the TCP/IP port, and then launch rsync locally. The following commands are used: jas@latte:~$ cat rsyncd.conf
address 127.0.0.1
uid = root
gid = root
[root]
path = /
jas@latte:~$ adb push rsyncd.conf /extSdCard/rsyncd.conf
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
0 KB/s (57 bytes in 0.059s)
jas@latte:~$ adb root
jas@latte:~$ adb shell rsync --daemon --no-detach --config=/extSdCard/rsyncd.conf &
jas@latte:~$ adb forward tcp:6010 tcp:873
jas@latte:~$ sudo rsync -av --delete --exclude /dev --exclude /acct --exclude /sys --exclude /proc rsync://localhost:6010/root/ /root/s3-bup/
...
Now feeling safe that I would not lose any data, I remove the SIM card from my phone (to avoid having calls, SMS or cell data interrupt during the installation) and follow the Replicant Samsung S3 installation documentation. Installation was straightforward. I booted up the newly installed ROM and familiarized myself with it. My first reaction was that the graphics felt a bit slower compared to Replicant 4.0, but it is hard to tell for certain. After installation, I took a quick rsync backup of the freshly installed phone, to have a starting point for future backups. Since my IMAP and CardDav/CalDav servers use certificates signed by CACert I first had to install the CACert trust anchors, to get SMS Backup+ and DAVDroid to connect. For some reason it was not sufficient to add only the root CACert certificate, so I had to add the intermediate CA cert as well. To load the certs, I invoke the following commands, selecting Install from SD Card when the menu is invoked (twice). adb push root.crt /sdcard/
adb shell am start -n "com.android.settings/.Settings\"\$\"SecuritySettingsActivity"
adb push class3.crt /sdcard/
adb shell am start -n "com.android.settings/.Settings\"\$\"SecuritySettingsActivity"
I restore apps with oandbackup, and I select a set of important apps that I want restored with settings preserved, including aCal, K9, Xabber, c:geo, OsmAnd~, NewsBlur, Google Authenticator. I install SMS Backup+ from FDroid separately and configure it, since SMS Backup+ doesn't seem to want to restore anything if the app was restored with settings using oandbackup. I install and configure the DAVdroid account with the server URL, and watch it populate my address book and calendar with information. After organizing the icons on the launcher screen, and changing the wallpaper, I'm up and running with Replicant 4.2. This upgrade effort took me around two evenings to complete, with around half of the time consumed by exploring different ways to do the rsync backup before I settled on the rsync daemon approach. Compared to the last time, when I spent almost two weeks researching various options and preparing for the install, this felt like a swift process.
I spent some time researching how to get the various non-free components running. This is of course sub-optimal, and the Replicant project does not endorse non-free software. Alas, there aren't any devices out there that meet my requirements and use only free software. Personally, I feel using a free core OS like Replicant and then adding some non-free components back is a better approach than using CyanogenMod directly, or (horror) the stock ROM. Even better is of course to not add these components back, but you have to decide for yourselves which trade-offs you want to make. The Replicant wiki has a somewhat outdated page on Samsung S3 firmware. Below are my notes for each component, which apply to Replicant 4.2 0001. You need to first prepare your device a bit using these commands, and it is a good idea to reboot the device after installing the files. adb root
adb shell mount -o rw,remount /system
adb shell mkdir /system/vendor/firmware
adb shell chmod 755 /system/vendor/firmware
GPS: The required files are the same as for Replicant 4.0, and using the files from CyanogenMod 10.1.3 works fine. The following commands load them onto the device. Note that this will load code that will execute on your main CPU which is particularly bothersome. There seems to exist a number of different versions of these files, CyanogenMod have the same gpsd and gps.exynos4.so in version 10.1.3 and 10.2 but the libsecril-client.so differs between 10.1.3 and 10.2. All files differ from the files I got with my stock Samsung ROM on this device (MD5 checksums in my previous blog). I have not investigated how these versions differs or which of them should be recommended. I use the files from CyanogenMod 10.1.3 because it matches the Android version and because the files are easily available. adb push cm-10.1.3-i9300/system/bin/gpsd /system/bin/gpsd
adb shell chmod 755 /system/bin/gpsd
adb push cm-10.1.3-i9300/system/lib/hw/gps.exynos4.so /system/lib/hw/gps.exynos4.so
adb push cm-10.1.3-i9300/system/lib/libsecril-client.so /system/lib/libsecril-client.so
adb shell chmod 644 /system/lib/hw/gps.exynos4.so /system/lib/libsecril-client.so
Bluetooth: Only one file has to be installed, apparently firmware loaded onto the Bluetooth chip. CyanogenMod 10.1.3 and 10.2 contain identical files, which have the string "BCM4334B0 37.4MHz Class1.5 Samsung D2" in them. The file I got with my stock ROM has the string "BCM4334B0 37.4MHz Class1.5 Samsung M0" in it. I don't know the difference, although I have seen that D2 sometimes refers to the US version of a Samsung device. My device is the international version, but it seems to work anyway. adb push cm-10.1.3-i9300/system/bin/bcm4334.hcd /system/vendor/firmware/bcm4334.hcd
adb shell chmod 644 /system/vendor/firmware/bcm4334.hcd
Front Camera: Two files have to be installed, apparently firmware loaded onto the camera chip. CyanogenMod 10.1.3 and 10.2 contain identical files, which have the string "[E4412 520-2012/08/30 17:35:56]OABH30" in them. The file I got with my stock ROM has the string "[E4412 533-2012/10/06 14:38:46]OABJ06" in it. I don't know the difference. adb push cm-10.1.3-i9300/system/vendor/firmware/fimc_is_fw.bin /system/vendor/firmware/fimc_is_fw.bin
adb push cm-10.1.3-i9300/system/vendor/firmware/setfile.bin /system/vendor/firmware/setfile.bin
adb shell chmod 644 /system/vendor/firmware/fimc_is_fw.bin /system/vendor/firmware/setfile.bin
NFC: I'm happy that I got NFC to work; this was one of my main issues with Replicant 4.0 (see my earlier blog post). Only one file is needed, however CyanogenMod does not seem to distribute it, so you have to get it from your stock ROM or elsewhere. The md5 of the file I have is b9364ba59de1947d4588f588229bae20 (and no, I will not send it to you). I have tested it with the YubiKey NEO and the Yubico Authenticator app. adb push clockworkmod/blobs/ee6/7188ca465cf01dd355a92685a42361e113f886ef44e96d371fdaebf57acae /system/vendor/firmware/libpn544_fw.so
adb shell chmod 644 /system/vendor/firmware/libpn544_fw.so
Wifi: I haven't gotten wifi to work, although I have not tried very hard. Loading the CyanogenMod firmware makes my device find wireless networks, but when I try to authenticate (WPA2-PSK), I get failures. Possibly some other files have to be loaded as well. Update: This blog post has been updated since initial posting to use rsync over adb instead of USB tethering, and to mention that I got the ROM building to work.
